user retention
Save, Revisit, Retain: A Scalable Framework for Enhancing User Retention in Large-Scale Recommender Systems
Jiang, Weijie, Ordorica, Armando, Yang, Jaewon, Gudmundsson, Olafur, Tu, Yucheng, Duan, Huizhong
User retention is a critical objective for online platforms like Pinterest, as it strengthens user loyalty and drives growth through repeated engagement. A key indicator of retention is revisitation, i.e., when users return to view previously saved content, a behavior often sparked by personalized recommendations and user satisfaction. However, modeling and optimizing revisitation poses significant challenges. One core difficulty is accurate attribution: it is often unclear which specific user actions or content exposures trigger a revisit, since many confounding factors (e.g., content quality, user interface, notifications, or even changing user intent) can influence return behavior. Additionally, the scale and timing of revisitations introduce further complexity; users may revisit content days or even weeks after their initial interaction, requiring the system to maintain and associate extensive historical records across millions of users and sessions. These complexities render existing methods insufficient for robustly capturing and optimizing long-term revisitation. To address these gaps, we introduce a novel, lightweight, and interpretable framework for modeling revisitation behavior and optimizing long-term user retention in Pinterest's search-based recommendation context. By defining a surrogate attribution process that links saves to subsequent revisitations, we reduce noise in the causal relationship between user actions and return visits. Our scalable event aggregation pipeline enables large-scale analysis of user revisitation patterns and enhances the ranking system's ability to surface items with high retention value. Deployed on Pinterest's Related Pins surface to serve 500+ million users, the framework led to a significant lift of 0.1% in active users without additional computational costs.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.72)
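As a rough sketch of the surrogate attribution step this abstract describes (linking saves to subsequent revisitations), the following is a minimal illustration; the event format, function names, and the 28-day window are assumptions, not details from the paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed attribution window; the paper does not specify one.
ATTRIBUTION_WINDOW = timedelta(days=28)

def attribute_revisits(save_events, revisit_events):
    """Link each revisit of a pin back to the user's earlier save of it.

    save_events / revisit_events: lists of (user_id, pin_id, timestamp).
    Returns a dict mapping pin_id -> count of attributed revisits, a
    per-item signal a ranker could use as a retention-value feature.
    """
    saves = defaultdict(list)  # (user, pin) -> save timestamps
    for user, pin, ts in save_events:
        saves[(user, pin)].append(ts)

    revisit_counts = defaultdict(int)
    for user, pin, ts in revisit_events:
        # Attribute the revisit only if a prior save by the same user
        # falls inside the attribution window.
        window = ATTRIBUTION_WINDOW.total_seconds()
        if any(0 <= (ts - s).total_seconds() <= window
               for s in saves.get((user, pin), [])):
            revisit_counts[pin] += 1
    return dict(revisit_counts)
```

Restricting attribution to same-user, same-item pairs inside a fixed window is what cuts down the confounders (notifications, UI changes, shifting intent) the abstract mentions.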
Integrating LLMs in Gamified Systems
This work presents a thorough mathematical framework for incorporating Large Language Models (LLMs) into gamified systems, with an emphasis on improving task dynamics, user engagement, and reward systems. Integrating LLMs makes personalized feedback, adaptive learning, and dynamic content creation possible, all of which are crucial for improving user engagement and system performance. A simulated environment tests the framework's adaptability and demonstrates its potential for real-world applications in various industries, including business, healthcare, and education. The findings demonstrate how LLMs can offer customized experiences that raise system effectiveness and user retention. The study also examines the difficulties this framework aims to solve, highlighting its importance in maximizing involvement and encouraging sustained behavioral change across a range of sectors.
- Europe > Portugal > Lisbon > Lisbon (0.14)
- Asia > Singapore (0.04)
- North America > United States > Hawaii (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Research Report > Experimental Study (0.48)
- Research Report > New Finding (0.34)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Health & Medicine (1.00)
- Education > Educational Setting (0.93)
- Education > Educational Technology > Educational Software > Computer Based Training (0.68)
DarkBench: Benchmarking Dark Patterns in Large Language Models
Kran, Esben, Nguyen, Hieu Minh "Jord", Kundu, Akash, Jawhar, Sami, Park, Jinsuk, Jurewicz, Mateusz Maria
Measuring these dark patterns is essential for understanding and mitigating the potentially manipulative behaviors of LLMs. While some patterns, like Brand Bias and User Retention, were adapted directly from known dark patterns in UI/UX, others, like Harmful Generation and Anthropomorphization, represent critical risks not explicitly addressed in the Brignull and Darlo (2010) taxonomy. Table 4 demonstrates how these categories map to or expand on established dark patterns, providing a foundation for their inclusion. However, some risks, particularly Anthropomorphization and Harmful Generation, require additional justification. Anthropomorphization, the attribution of human-like characteristics to AI systems, has been identified as a key factor in enhancing user engagement and trust.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Singapore (0.04)
- Law (0.93)
- Health & Medicine > Consumer Health (0.46)
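A minimal sketch of how such a benchmark can be scored, assuming per-response annotations of whether a dark pattern occurred; the annotation format and category identifiers are assumptions for illustration, not DarkBench's actual schema.

```python
from collections import Counter

def dark_pattern_rates(annotations):
    """annotations: list of (model, category, present) judgments, e.g.
    ("model-a", "user_retention", True). Returns, per model and per
    category, the fraction of responses flagged as exhibiting the
    dark pattern."""
    flagged, total = Counter(), Counter()
    for model, category, present in annotations:
        total[(model, category)] += 1
        if present:
            flagged[(model, category)] += 1
    rates = {}
    for (model, category), n in total.items():
        rates.setdefault(model, {})[category] = flagged[(model, category)] / n
    return rates
```

Per-category rates rather than a single aggregate score make it possible to compare, say, User Retention pressure against Anthropomorphization across models.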
Unveiling the Impact of Multi-Modal Interactions on User Engagement: A Comprehensive Evaluation in AI-driven Conversations
Zhang, Lichao, Yu, Jia, Zhang, Shuai, Li, Long, Zhong, Yangyang, Liang, Guanbao, Yan, Yuming, Ma, Qing, Weng, Fangsheng, Pan, Fayu, Li, Jing, Xu, Renjun, Lan, Zhenzhong
Large Language Models (LLMs) have significantly advanced user-bot interactions, enabling more complex and coherent dialogues. However, the prevalent text-only modality might not fully exploit the potential for effective user engagement. This paper explores the impact of multi-modal interactions, which incorporate images and audio alongside text, on user engagement in chatbot conversations. We conduct a comprehensive analysis using a diverse set of chatbots and real-user interaction data, employing metrics such as retention rate and conversation length to evaluate user engagement. Our findings reveal a significant enhancement in user engagement with multi-modal interactions compared to text-only dialogues. Notably, the incorporation of a third modality significantly amplifies engagement beyond the benefits observed with just two modalities. These results suggest that multi-modal interactions optimize cognitive processing and facilitate richer information comprehension. This study underscores the importance of multi-modality in chatbot design, offering valuable insights for creating more engaging and immersive AI communication experiences and informing the broader AI community about the benefits of multi-modal interactions in enhancing user engagement.
- Europe > United Kingdom (0.04)
- Asia > Malaysia (0.04)
- Asia > Japan > Shikoku > Ehime Prefecture > Matsuyama (0.04)
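The two engagement metrics named in the abstract, retention rate and conversation length, can be computed from interaction logs roughly as follows; the log schema and the next-day retention definition are illustrative assumptions, not the paper's exact definitions.

```python
def engagement_metrics(sessions):
    """sessions: list of (user_id, day_index, n_turns) records.
    Returns (next-day retention rate, mean conversation length)."""
    first_day, returned, lengths = {}, set(), []
    for user, day, n_turns in sessions:
        lengths.append(n_turns)
        # Track each user's earliest session day.
        if user not in first_day or day < first_day[user]:
            first_day[user] = day
    for user, day, _ in sessions:
        # A user is "retained" if they come back the day after first use.
        if day == first_day[user] + 1:
            returned.add(user)
    retention = len(returned) / len(first_day)
    mcl = sum(lengths) / len(lengths)
    return retention, mcl
```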
Ansible Lightspeed: A Code Generation Service for IT Automation
Sahoo, Priyam, Pujar, Saurabh, Nalawade, Ganesh, Gebhardt, Richard, Mandel, Louis, Buratti, Luca
The availability of Large Language Models (LLMs) which can generate code, has made it possible to create tools that improve developer productivity. Integrated development environments or IDEs which developers use to write software are often used as an interface to interact with LLMs. Although many such tools have been released, almost all of them focus on general-purpose programming languages. Domain-specific languages, such as those crucial for IT automation, have not received much attention. Ansible is one such YAML-based IT automation-specific language. Red Hat Ansible Lightspeed with IBM Watson Code Assistant, further referred to as Ansible Lightspeed, is an LLM-based service designed explicitly for natural language to Ansible code generation. In this paper, we describe the design and implementation of the Ansible Lightspeed service and analyze feedback from thousands of real users. We examine diverse performance indicators, classified according to both immediate and extended utilization patterns along with user sentiments. The analysis shows that the user acceptance rate of Ansible Lightspeed suggestions is higher than comparable tools that are more general and not specific to a programming language. This remains true even after we use much more stringent criteria for what is considered an accepted model suggestion, discarding suggestions which were heavily edited after being accepted. The relatively high acceptance rate results in higher-than-expected user retention and generally positive user feedback. This paper provides insights on how a comparatively small, dedicated model performs on a domain-specific language and more importantly, how it is received by users.
- Asia > India (0.04)
- North America > United States > New York (0.04)
- Europe > Switzerland (0.04)
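The "more stringent criteria" for accepted suggestions can be illustrated with a small sketch that discards accepted suggestions the user then heavily edited; the 20% edit threshold and the similarity measure are assumptions for illustration, not the paper's actual criterion.

```python
import difflib

def strict_acceptance_rate(events, max_edit=0.2):
    """events: list of (suggested_code, final_code) pairs, where
    final_code is None if the suggestion was rejected outright.
    A suggestion counts as accepted only if the user kept it largely
    intact after acceptance."""
    accepted = 0
    for suggested, final in events:
        if final is None:
            continue  # rejected at suggestion time
        similarity = difflib.SequenceMatcher(None, suggested, final).ratio()
        if 1.0 - similarity <= max_edit:
            accepted += 1  # accepted and not heavily edited afterwards
    return accepted / len(events)
```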
Safer Conversational AI as a Source of User Delight
Lu, Xiaoding, Korshuk, Aleksey, Liu, Zongyi, Beauchamp, William, Research, Chai
This work explores the impact of moderation on users' enjoyment of conversational AI systems. While recent advancements in Large Language Models (LLMs) have led to highly capable conversational AIs that are increasingly deployed in real-world settings, there is growing concern over AI safety and the need to moderate systems to encourage safe language and prevent harm. However, some users argue that current approaches to moderation limit the technology, compromise free expression, and reduce the value it delivers. This study takes an unbiased stance and shows that moderation does not necessarily detract from user enjoyment. Heavy-handed moderation does appear to have a detrimental effect, but models moderated to be safer can lead to a better user experience. By deploying various conversational AIs on the Chai platform, the study finds that user retention can increase with a level of moderation and safe system design. These results demonstrate the importance of defining safety in models in a way that is both responsible and focused on serving users.
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- (8 more...)
- Research Report (0.70)
- Overview > Growing Problem (0.34)
Rewarding Chatbots for Real-World Engagement with Millions of Users
Irvine, Robert, Boubert, Douglas, Raina, Vyas, Liusie, Adian, Zhu, Ziyi, Mudupalli, Vineet, Korshuk, Aliaksei, Liu, Zongyi, Cremer, Fritz, Assassi, Valentin, Beauchamp, Christie-Carol, Lu, Xiaoding, Rialan, Thomas, Beauchamp, William
The emergence of pretrained large language models has led to the deployment of a range of social chatbots for chitchat. Although these chatbots demonstrate language ability and fluency, they are not guaranteed to be engaging and can struggle to retain users. This work investigates the development of social chatbots that prioritize user engagement to enhance retention, specifically examining the use of human feedback to efficiently develop highly engaging chatbots. The proposed approach uses automatic pseudo-labels collected from user interactions to train a reward model that can be used to reject low-scoring sample responses generated by the chatbot model at inference time. Intuitive evaluation metrics, such as mean conversation length (MCL), are introduced as proxies to measure the level of engagement of deployed chatbots. A/B testing on groups of 10,000 new daily chatbot users on the Chai Research platform shows that this approach increases the MCL by up to 70%, which translates to a more than 30% increase in user retention for a GPT-J 6B model. Future work aims to use the reward model to realise a data flywheel, where the latest user conversations can be used to alternately fine-tune the language model and the reward model.
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
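The inference-time rejection of low-scoring samples described above amounts to best-of-N sampling under a reward model; a minimal sketch follows, where the `generate` and `reward` callables stand in for the chatbot model and the trained reward model, which are not specified here.

```python
def best_of_n_response(context, generate, reward, n=4):
    """Sample n candidate replies for the given context and keep the
    one the reward model scores highest, rejecting the rest."""
    candidates = [generate(context) for _ in range(n)]
    return max(candidates, key=reward)

def mean_conversation_length(conversations):
    """MCL proxy metric: average number of turns per conversation."""
    return sum(len(c) for c in conversations) / len(conversations)
```

Because rejection happens at inference time, engagement can be improved without retraining the base chatbot, only the reward model.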
Reinforcing User Retention in a Billion Scale Short Video Recommender System
Cai, Qingpeng, Liu, Shuchang, Wang, Xueliang, Zuo, Tianyou, Xie, Wentao, Yang, Bin, Zheng, Dong, Jiang, Peng, Gai, Kun
Recently, short video platforms have achieved rapid user growth by recommending interesting content to users. The objective of the recommendation is to optimize user retention, thereby driving the growth of DAU (Daily Active Users). Retention is a long-term feedback signal accumulated over multiple interactions between users and the system, and it is hard to decompose the retention reward to each item or list of items. Thus, traditional point-wise and list-wise models are unable to optimize retention. In this paper, we choose reinforcement learning methods to optimize retention, as they are designed to maximize long-term performance. We formulate the problem as an infinite-horizon request-based Markov Decision Process, and our objective is to minimize the accumulated time interval of multiple sessions, which is equivalent to improving the app open frequency and user retention. However, current reinforcement learning algorithms cannot be directly applied in this setting due to the uncertainty, bias, and long delay time incurred by the properties of user retention. We propose a novel method, dubbed RLUR, to address the aforementioned challenges. Both offline and live experiments show that RLUR can significantly improve user retention. RLUR has been fully launched in the Kuaishou app for a long time, and achieves consistent performance improvement on user retention and DAU.
- Asia > China > Beijing > Beijing (0.06)
- North America > United States > Texas > Travis County > Austin (0.05)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.05)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
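One way to read the objective above is that each recommendation request earns a reward equal to the negative time until the user's next session, so maximizing return means minimizing accumulated return intervals. A tabular TD(0) sketch under that reading; the state representation, learning rate, and value table are illustrative assumptions, not RLUR's actual method.

```python
def td0_update(value, trajectory, alpha=0.1, gamma=0.99):
    """trajectory: list of (state, time_to_next_session) pairs in order.
    Updates a tabular value function in place and returns it."""
    for i, (state, gap) in enumerate(trajectory):
        reward = -gap  # shorter return interval -> larger reward
        next_v = (value.get(trajectory[i + 1][0], 0.0)
                  if i + 1 < len(trajectory) else 0.0)
        td_error = reward + gamma * next_v - value.get(state, 0.0)
        value[state] = value.get(state, 0.0) + alpha * td_error
    return value
```

The long, noisy delays the abstract mentions show up here directly: `gap` may be observed days after the request, which is what makes standard RL updates hard to apply at this scale.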
[Product Roadmap] How CleverTap used AI and ML to help Nykaa, Zomato, AirAsia, and Dream 11 engage users
Anand Jain, Sunil Thomas, and Suresh Kondamudi were unanimous on one thing: high-quality user experience. Years of experience had taught the trio that user experience could be achieved only when an app was useful, relevant, and timely. This meant an app that "understands the user", and is able to show the right content to the right user, on the right channel, and at the right time. This necessitated a foundation built on data. "So, the three of us left our corporate careers and decided to dedicate a year to start CleverTap," Sunil says.
- Consumer Products & Services > Travel (0.51)
- Information Technology > Software (0.49)
- Transportation > Passenger (0.41)
- Transportation > Air (0.41)
My Experience as a Product Data Analyst
Unlike universal marketing concepts such as SEO and SEM, each product is different and has a learning curve before you understand what the product is for and how users interact with it. Allocate the extra time you'll need during onboarding to learn user flows and the systems used for event tracking. When I started as a product analyst, it took weeks to learn how to use the product analytics software effectively and figure out the event names that mapped to each screen in the user flows. Supporting a product gives you an opportunity to try it before applying for the position. Having an interest in the product or industry will help you be a more effective analyst because you can relate to the user experience and account for that in your analysis.